Monte-Carlo Tree Search (MCTS) is an adversarial search paradigm that first found prominence with its success in the domain of computer Go. Early theoretical work established the game-theoretic soundness and convergence bounds for Upper Confidence bounds applied to Trees (UCT), the most popular instantiation of MCTS; however, there remain notable gaps in our understanding of how UCT behaves in practice. In this work, we address one such gap by considering the question of whether UCT can exhibit lookahead pathology -- a paradoxical phenomenon first observed in Minimax search where greater search effort leads to worse decision-making. We introduce a novel family of synthetic games that offer rich modeling possibilities while remaining amenable to mathematical analysis. Our theoretical and experimental results suggest that UCT is indeed susceptible to pathological behavior in a range of games drawn from this family.
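For reference, the UCT rule discussed above selects the child that maximizes an upper confidence bound on its estimated value. A minimal sketch (the exploration constant c and node bookkeeping are illustrative, not taken from the paper):

```python
import math

def uct_score(child_visits, child_value_sum, parent_visits, c=1.414):
    """UCB1-style score used by UCT to pick a child during tree descent."""
    if child_visits == 0:
        return float("inf")  # always try unvisited children first
    exploitation = child_value_sum / child_visits                     # average reward
    exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploitation + exploration

def select_child(children, parent_visits):
    """children: list of (visits, value_sum) pairs; returns index of best child."""
    scores = [uct_score(v, s, parent_visits) for v, s in children]
    return max(range(len(children)), key=scores.__getitem__)
```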
We tackle a new task of few-shot object counting and detection. Given a few exemplar bounding boxes of a target object class, we aim to count and detect all objects of that class. The task uses the same supervision as few-shot object counting, but additionally outputs the object bounding boxes along with the total count. To address this challenging problem, we introduce a novel two-stage training strategy and a novel uncertainty-aware few-shot object detector: Counting-DETR. The former aims to generate pseudo ground-truth bounding boxes to train the latter; the latter leverages the pseudo ground truth provided by the former while taking the necessary steps to account for its imperfection. To validate the performance of our method on the new task, we introduce two new datasets named FSCD-147 and FSCD-LVIS. Both datasets contain images with complex scenes and multiple object classes per image, and exhibit large variations in object shape, size, and appearance. Our proposed method outperforms very strong baselines adapted from few-shot object counting and few-shot object detection by a large margin on both counting and detection metrics. Code and models are available at \url{https://github.com/vinairesearch/counting-detr}.
This paper introduces a new problem in 3D point clouds: few-shot instance segmentation. Given a few annotated point clouds exemplifying a target class, our goal is to segment all instances of that target class in a query point cloud. This problem has a wide range of practical applications where collecting point-wise instance segmentation annotations is extremely expensive. To address this problem, we propose Geodesic-Former -- the first geodesic-guided transformer for 3D point cloud instance segmentation. The key idea is to leverage geodesic distance to cope with the density imbalance of LiDAR 3D point clouds, which are dense near object surfaces and sparse or empty elsewhere, making Euclidean distance a poor cue for distinguishing different objects. Geodesic distance, on the other hand, is more suitable because it encodes the geometry of the scene, which can serve as a guiding signal for the attention mechanism in the transformer decoder to generate kernels representing distinct instance features. These kernels are then used in a dynamic convolution to obtain the final instance masks. To evaluate Geodesic-Former on the new task, we propose new splits of two common 3D point cloud instance segmentation datasets: ScanNetV2 and S3DIS. Geodesic-Former consistently outperforms strong baselines adapted from state-of-the-art 3D point cloud instance segmentation methods by a significant margin. The code is available at https://github.com/vinairesearch/geoformer.
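As a rough illustration of the geodesic-distance idea (not the authors' implementation), geodesic distances on a point cloud can be approximated by shortest paths over a k-nearest-neighbor graph; the anchor index and k below are placeholders:

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra
from sklearn.neighbors import kneighbors_graph

def approx_geodesic_distances(points, anchor_idx, k=16):
    """Approximate geodesic distances from one anchor point to all others.

    points: (N, 3) array of xyz coordinates. The kNN graph stands in for the
    object surfaces, so shortest paths must follow dense regions of the cloud.
    """
    graph = kneighbors_graph(points, n_neighbors=k, mode="distance")
    graph = graph.maximum(graph.T)   # symmetrize so edges go both ways
    dist = dijkstra(graph, directed=False, indices=anchor_idx)
    return dist                       # (N,) geodesic estimates; inf if unreachable
```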
We present a novel method for few-shot video classification that performs appearance and temporal alignment. Specifically, given a pair of query and support videos, we conduct appearance alignment via frame-level feature matching to obtain an appearance similarity score between the videos, while leveraging a temporal order-preserving prior to obtain a temporal similarity score between them. Moreover, we introduce a few-shot video classification framework that leverages the above appearance and temporal similarity scores across multiple steps, namely prototype-based training and testing, as well as inductive and transductive prototype refinement. To the best of our knowledge, our work is the first to explore transductive few-shot video classification. Extensive experiments on the Kinetics and Something-Something V2 datasets show that both appearance and temporal alignment are crucial for datasets sensitive to temporal order. Our method achieves results similar to or better than previous approaches on both datasets. Our code is available at https://github.com/vinairesearch/fsvc-ata.
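A hedged sketch of the two scores described above: appearance similarity from frame-level feature matching, and a temporal score from a monotonic, order-preserving frame alignment. The function names, the dynamic program, and the frame-count assumption are illustrative, not the paper's exact formulation:

```python
import numpy as np

def appearance_and_temporal_scores(query_feats, support_feats):
    """query_feats: (Tq, D), support_feats: (Ts, D) L2-normalized frame features.

    Appearance score: mean best-match cosine similarity over query frames.
    Temporal score: best total similarity over a strictly order-preserving
    frame matching (assumes Ts >= Tq, e.g. both videos sampled to 8 frames).
    """
    sim = query_feats @ support_feats.T            # (Tq, Ts) cosine similarities
    appearance = sim.max(axis=1).mean()

    tq, ts = sim.shape
    dp = np.full((tq + 1, ts + 1), -np.inf)
    dp[0, :] = 0.0                                 # no query frame matched yet
    for i in range(1, tq + 1):
        for j in range(i, ts + 1):                 # keep matched indices increasing
            dp[i, j] = sim[i - 1, j - 1] + dp[i - 1, :j].max()
    temporal = dp[tq, tq:].max() / tq              # average similarity along the path
    return appearance, temporal
```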
In this work, we propose to use out-of-distribution samples, i.e., unlabeled samples from outside the target classes, to improve few-shot learning. Specifically, we exploit easily available out-of-distribution samples to drive the classifier away from irrelevant features by maximizing the distance from the prototypes to the out-of-distribution samples while minimizing their distance to the in-distribution samples (i.e., support and query data). Our approach is simple to implement, agnostic to the feature extractor, lightweight without any additional pre-training cost, and applicable to both inductive and transductive settings. Extensive experiments on various standard benchmarks demonstrate that the proposed method consistently improves the performance of pretrained networks with different architectures.
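A minimal PyTorch-style sketch of the stated idea, pulling prototypes toward in-distribution features while pushing them away from out-of-distribution (OOD) features; the loss form and the weight lam are assumptions, not the paper's exact objective:

```python
import torch

def prototype_ood_loss(prototypes, in_dist_feats, ood_feats, lam=1.0):
    """prototypes: (C, D); in_dist_feats, ood_feats: (N, D) embedded samples.

    Minimizes the distance from each in-distribution feature to its nearest
    prototype while maximizing the distance from prototypes to OOD features.
    """
    d_in = torch.cdist(in_dist_feats, prototypes)   # (N_in, C)
    pull = d_in.min(dim=1).values.mean()            # pull toward nearest prototype
    d_ood = torch.cdist(ood_feats, prototypes)      # (N_ood, C)
    push = d_ood.mean()                             # push prototypes away from OOD
    return pull - lam * push
```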
Here, we demonstrate how machine learning enables the prediction of comonomer reactivity ratios from the molecular structure of the monomers. We combine multi-task learning, multiple inputs, and a Graph Attention Network to build a model capable of predicting reactivity ratios from the monomers' chemical structures.
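A hypothetical sketch of such a model using PyTorch Geometric: each monomer graph is encoded with GAT layers, the two embeddings are combined, and both reactivity ratios are predicted jointly as a multi-task output (architecture details and dimensions are assumptions, not the authors' configuration):

```python
import torch
from torch import nn
from torch_geometric.nn import GATConv, global_mean_pool

class ReactivityRatioGAT(nn.Module):
    """Encodes two monomer graphs and jointly predicts reactivity ratios (r1, r2)."""

    def __init__(self, node_dim, hidden=64, heads=4):
        super().__init__()
        self.gat1 = GATConv(node_dim, hidden, heads=heads)
        self.gat2 = GATConv(hidden * heads, hidden, heads=1)
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden),
                                  nn.ReLU(),
                                  nn.Linear(hidden, 2))  # multi-task output: r1 and r2

    def encode(self, x, edge_index, batch):
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        return global_mean_pool(h, batch)                # one vector per monomer graph

    def forward(self, mono1, mono2):
        z1 = self.encode(mono1.x, mono1.edge_index, mono1.batch)
        z2 = self.encode(mono2.x, mono2.edge_index, mono2.batch)
        return self.head(torch.cat([z1, z2], dim=-1))
```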
Modern deep neural networks have achieved superhuman performance in tasks from image classification to game play. Surprisingly, these various complex systems with massive amounts of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon is known as "Neural Collapse," and it was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions of the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove that Neural Collapse occurs for deep linear networks under the popular mean squared error (MSE) and cross-entropy (CE) losses. Furthermore, we extend our analysis to imbalanced data for the MSE loss and present the first geometric analysis of Neural Collapse in this setting.
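For context, Neural Collapse is commonly summarized by properties such as within-class variability collapse and convergence of the centered class means to a simplex equiangular tight frame (a standard formulation from the literature, not specific to this paper's proofs):

```latex
% NC1: last-layer features collapse to their class means;
% NC2: the centered class means form a simplex equiangular tight frame (ETF).
\begin{align}
  h_{k,i} &\to \mu_k \quad \text{for every sample } i \text{ of class } k, \\
  \frac{\langle \mu_j - \mu_G,\ \mu_k - \mu_G \rangle}
       {\|\mu_j - \mu_G\|\,\|\mu_k - \mu_G\|}
  &\to
  \begin{cases}
    1, & j = k,\\[2pt]
    -\dfrac{1}{K-1}, & j \neq k,
  \end{cases}
\end{align}
% where $\mu_G$ is the global feature mean and $K$ is the number of classes;
% the classifier weights align with the class means, $w_k \propto \mu_k - \mu_G$ (NC3).
```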
Machine Reading Comprehension has become one of the most advanced and popular research topics in the field of Natural Language Processing in recent years. The classification of question answerability is a fairly significant sub-task in machine reading comprehension, yet it has received relatively little study. Retro-Reader is one of the studies that has addressed this problem effectively. However, the encoders of most traditional machine reading comprehension models in general, and Retro-Reader in particular, have not been able to fully exploit the semantic information of the context. Inspired by SemBERT, we use semantic role labels from the SRL task to add semantics to pre-trained language models such as mBERT, XLM-R, and PhoBERT. This experiment was conducted to compare the influence of semantics on answerability classification for Vietnamese machine reading comprehension. Additionally, we hope this experiment will enhance the encoder of the Retro-Reader model's Sketchy Reading Module. The improved, semantics-aware Retro-Reader encoder was applied to the Vietnamese Machine Reading Comprehension task for the first time and obtained positive results.
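A hedged sketch of the SemBERT-style fusion described above: an embedding of SRL tags is concatenated with the pre-trained encoder's token representations before the answerability classifier (tag alignment, dimensions, and module names are illustrative assumptions):

```python
import torch
from torch import nn

class SrlAugmentedEncoder(nn.Module):
    """Wraps a pre-trained encoder (e.g., mBERT/XLM-R/PhoBERT via HuggingFace)
    and enriches its token representations with SRL tag embeddings."""

    def __init__(self, encoder, num_srl_tags, tag_dim=32):
        super().__init__()
        self.encoder = encoder                        # any model exposing last_hidden_state
        self.tag_embed = nn.Embedding(num_srl_tags, tag_dim)
        hidden = encoder.config.hidden_size
        self.fuse = nn.Linear(hidden + tag_dim, hidden)

    def forward(self, input_ids, attention_mask, srl_tag_ids):
        # srl_tag_ids: (B, T) SRL label id per subword token, aligned beforehand
        tokens = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        tags = self.tag_embed(srl_tag_ids)
        fused = self.fuse(torch.cat([tokens, tags], dim=-1))
        return fused                                   # (B, T, hidden), fed to the answerability head
```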
RTE is a significant problem with a reasonably active research community. Proposed approaches to this problem are quite diverse, spanning many different directions. For Vietnamese, the RTE problem is relatively new, but it plays a vital role in natural language understanding systems. Currently, methods based on contextual word representation learning models have given outstanding results for this problem. However, Vietnamese is a semantically rich language. Therefore, in this paper, we present an experiment combining semantic word representations obtained through the SRL task with the contextual representations of BERT-related models for the RTE problem. The experimental results lead to conclusions about the influence and role of semantic representations in Vietnamese natural language understanding. The experimental results show that the semantics-aware contextual representation model performs about 1% better than the model that does not incorporate semantic representations. In addition, the effects across data domains are also greater for Vietnamese than for English. This result also shows the positive influence of SRL on the RTE problem in Vietnamese.
To the best of our knowledge, this paper makes the first attempt to answer whether word segmentation is necessary for Vietnamese sentiment classification. To do this, we present five pre-trained monolingual S4-based language models for Vietnamese, including one model without word segmentation and four models using the RDRsegmenter, uitnlp, pyvi, or underthesea toolkits in the data pre-processing phase. Based on comprehensive experimental results on two corpora, the VLSP2016-SA corpus of technical article reviews from news and social media and the UIT-VSFC corpus of educational surveys, we offer two suggestions. First, with traditional classifiers like Naive Bayes or Support Vector Machines, word segmentation may not be necessary for Vietnamese sentiment classification corpora drawn from the social domain. Second, word segmentation is necessary for Vietnamese sentiment classification when it is applied before the BPE method and the result is fed into a deep learning model. In this setting, RDRsegmenter is the most stable word segmentation toolkit compared with uitnlp, pyvi, and underthesea.
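To make the pre-processing order concrete, a small illustrative pipeline using pyvi for word segmentation followed by a BPE subword tokenizer (the specific tokenizer and pre-trained model here are stand-ins, not necessarily those used in the paper):

```python
from pyvi import ViTokenizer                    # Vietnamese word segmentation
from transformers import AutoTokenizer          # BPE subword tokenizer

# Segment words first, then apply BPE, matching the
# "word segmentation before BPE" setting described above.
bpe_tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

def preprocess(sentence: str):
    segmented = ViTokenizer.tokenize(sentence)   # joins compound words with underscores
    return bpe_tokenizer(segmented, truncation=True, max_length=256)

encoded = preprocess("Sinh viên rất hài lòng với môn học này.")
print(encoded["input_ids"])                      # subword ids fed to the model
```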